Cloud Native Microservices With Kubernetes

A Comprehensive Guide to Building, Scaling, Deploying, Observing, and Managing Highly-Available Microservices in Kubernetes

Minimum price: $22.99
Suggested price: $27.99

About the Book

TL;DR:

In this comprehensive guide, we will dive deep into the intricacies of microservices, high-availability strategies, CI/CD, GitOps, and observability in a Cloud Native world.

We will employ a wide array of tools, including:

  • Docker
  • Kubernetes
  • minikube
  • Rancher
  • Terraform
  • Operators
  • Helm
  • Prometheus
  • Istio
  • Grafana
  • OpenTelemetry
  • Jaeger
  • Loki
  • Argo CD
  • Ansible
  • and more.

These are the top 10 things you will learn in this guide:

  1. Understand the Cloud Native approach to building software and microservices.
  2. Understand Kubernetes architecture and its core components.
  3. Run Kubernetes locally and in the cloud.
  4. Use Rancher to manage containers and Kubernetes.
  5. Manage data persistence in Kubernetes.
  6. Understand the different types of services in Kubernetes and when to use each one.
  7. Use Operators, Helm, Terraform, and other tools to provision and manage Kubernetes clusters.
  8. Implement deployment strategies such as Blue/Green, Canary, and Rolling updates.
  9. Use Istio to implement a service mesh in Kubernetes.
  10. Implement Observability and GitOps in Kubernetes using Prometheus, Grafana, Jaeger, Loki, OpenTelemetry, Argo CD, and Ansible.

About This Book:

This guide takes you on an exhilarating journey through the subtleties and possibilities of Kubernetes, the most popular container orchestration platform in the world. You will discover how to use Kubernetes to build a robust, scalable, and resilient microservices architecture.

"Cloud Native Microservices With Kubernetes" is an insightful guide taking you into the nitty-gritty of Kubernetes and providing help to leverage all its possibilities.

It is designed for a broad audience, from newcomers ready to dive into Kubernetes to seasoned practitioners who want to raise their level of proficiency and stay current with the platform's latest developments.

The chapters are arranged sequentially to support easy, progressive learning. We start with a review of the available options for running Kubernetes locally and in the cloud.

We will explore how to provision a Kubernetes cluster using Minikube, Rancher, and Terraform, and finally how to provision a cluster using a cloud provider's managed Kubernetes service.

We will also discuss the Cloud Native app development methodology and some of the basic building blocks of Kubernetes before moving on to more advanced topics such as resource management, autoscaling, and deploying different types of microservices.
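
To give a flavor of the kind of configuration those chapters work with, here is a minimal, illustrative sketch of resource requests and limits on a Deployment. The names, image, and values are hypothetical placeholders, not examples taken from the book:

```yaml
# Hypothetical Deployment fragment: requests tell the scheduler how much CPU and
# memory to reserve for each container, while limits cap what the container may
# consume at runtime.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25      # placeholder image
          resources:
            requests:
              cpu: "250m"        # reserve a quarter of a CPU core
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "256Mi"
```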

This exhaustive guide dives deep into all of this: the intricacies of microservices, high-availability strategies, CI/CD, and all things observability in the Cloud Native world.

We will be using Docker, Kubernetes, minikube, Rancher, Terraform, Operators, Helm, Prometheus, Istio, Grafana, OpenTelemetry, Jaeger, Loki, Ansible, and many other tools.

Our approach is grounded in GitOps: we will learn how to establish a solid GitOps workflow using Argo CD. We will also cover continuous delivery and deployment (CI/CD) along with deployment strategies such as Blue/Green, Canary deployments, and Rolling updates.
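
To make the GitOps idea concrete, here is a minimal sketch of an Argo CD Application manifest. The repository URL, paths, and names below are hypothetical placeholders rather than examples from the book:

```yaml
# Argo CD Application: Argo CD watches the Git repository and keeps the
# cluster in sync with the manifests stored in it.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-microservice                 # placeholder application name
  namespace: argocd                     # namespace where Argo CD runs
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-microservice-config.git  # placeholder repo
    targetRevision: main
    path: k8s                           # directory holding the Kubernetes manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-microservice
  syncPolicy:
    automated:
      prune: true                       # remove resources that were deleted from Git
      selfHeal: true                    # revert manual changes made outside Git
```

With a manifest like this, a Git push is all it takes to roll out a change; Argo CD detects the drift and synchronizes the cluster automatically.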

We will also review the different ways data can be managed in Kubernetes, from persistent volumes to StatefulSets. We will guide you through creating the different types of Services in Kubernetes, as well as the approaches for exposing those services outside the cluster with the help of Ingress and a Service Mesh.
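
As an illustration of what exposing a microservice can look like, here is a hedged sketch of a ClusterIP Service paired with an Ingress. The service name, host, and ports are hypothetical placeholders:

```yaml
# A ClusterIP Service fronting the pods of a hypothetical "orders" microservice,
# plus an Ingress that routes external HTTP traffic to that Service.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders                  # matches the labels on the Deployment's pods
  ports:
    - port: 80                   # port exposed inside the cluster
      targetPort: 8080           # port the container listens on
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders
spec:
  rules:
    - host: orders.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: orders
                port:
                  number: 80
```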

This guide will help you achieve high availability, scalability, efficient deployment, monitoring, CI/CD, and everything else you need to build your next microservices architecture, all using the power of Kubernetes and its ecosystem.

Each chapter of the guide includes an example and practical exercises to give you hands-on experience with Kubernetes and its ecosystem. By working through these examples, you will grasp the core Kubernetes concepts and their real-life implementations.

This guide will provide you with the knowledge and skills necessary for designing, creating, managing, scaling, deploying, and monitoring your microservices on Kubernetes.

By the end of this guide, I hope that Kubernetes will no longer feel like an intimidating maze, but rather a tool you are comfortable with and can use to write your own success story.

Enjoy the process!

  • Categories

    • DevOps
    • Docker
    • Distributed Systems
    • Infrastructure as Code
    • Cloud Computing
    • Software Architecture
    • Software Engineering

About the Author

Aymen El Amri

Aymen El Amri is an author, entrepreneur, trainer, and polymath software engineer who has excelled in a range of roles and responsibilities in the field of technology, including DevOps & Cloud Native, Cloud Architecture, Python, NLP, Data Science, and more.

Aymen has trained hundreds of software engineers and written multiple books and courses read by thousands of developers and software engineers.

Aymen El Amri has a practical approach to teaching based on breaking down complex concepts into easy-to-understand language and providing real-world examples that resonate with his audience.

Among the projects he founded are FAUN, eralabs.io, and Marketto. You can find Aymen on Twitter and LinkedIn.

Table of Contents

    • Cloud Native Microservices: How and Why
      • Common Approaches
      • The Twelve-Factor App
        • Codebase
        • Dependencies
        • Config
        • Backing Services
        • Build, Release, Run
        • Processes
        • Port Binding
        • Concurrency
        • Disposability
        • Dev/Prod Parity
        • Logs
        • Admin Processes
      • Microservices
        • Database per Service
        • API Composition
        • Service Instance per Container
        • Externalized Configuration
        • Server-Side Service Discovery
        • Circuit Breaker
        • Cloud Native
      • From Monolith to Cloud Native Microservices
    • Requirements
      • A Development Server
      • Install Kubectl
    • Kubernetes: Creating a Cluster
      • Creating a Development Kubernetes Cluster Using Minikube
        • Minikube: Installation
        • Minikube: Creating a Cluster
        • Minikube: Profiles
        • Minikube: User Interface
        • Creating a Deployment
        • Kubernetes Events
        • Exposing a Deployment
        • Deleting K8s Resources
        • Minikube: Addons
        • Using Kubectl with Minikube
        • Deleting clusters
      • Creating a Development Kubernetes Cluster Using Rancher
        • Requirements
        • Using Terraform to Launch the Cluster
        • Creating Kubernetes Resources Using Rancher UI
      • Creating an On-Premises Kubernetes Cluster Using Rancher
        • Requirements Before Starting
        • Creating a Cluster Using Rancher Server
        • Notes about High Availability
      • Creating an On-Premises Kubernetes Cluster: Other Options
      • Managed Clusters
      • Creating a Managed DOK Cluster Using Terraform
    • Kubernetes Architecture Overview
      • Introduction
      • The Control Plane
        • Etcd
        • API Server (kube-apiserver)
        • Controller Manager (kube-controller-manager)
        • Cloud Controller Manager (cloud-controller-manager)
        • Scheduler (kube-scheduler)
      • Worker Nodes
        • Kubelet
        • Container Runtime
        • Kube-proxy
      • Nodepools
      • An overview of the architecture
    • Stateless and Stateful Microservices
      • Introduction
      • Stateless Workloads
      • Stateful Workloads
    • Deploying Stateless Microservices: Introduction
      • Requirements
      • Creating a Namespace
      • Creating the Deployment
      • Examining Pods and Deployments
      • Accessing Pods
      • Exposing a Deployment
        • ClusterIP Service
        • NodePort Service
        • LoadBalancer Service
        • Headless Service
        • Ingress Service
    • Deploying Stateful Microservices: Persisting Data in Kubernetes
      • Requirements
      • Creating a Namespace
      • Creating a ConfigMap for the PostgreSQL Database
        • What is a ConfigMap?
        • ConfigMap for PostgreSQL
      • Persisting Data Storage on PostgreSQL
        • Kubernetes Volumes
        • VolumeClaims
        • StorageClass
        • Adding Storage to PostgreSQL
        • Creating a Deployment for PostgreSQL
        • Creating a Service for PostgreSQL
        • Creating a Deployment for our application
        • Creating a Service for Our Application
        • Creating an External Service for Our Application
        • Creating an Ingress for Our Application
      • Checking Logs and Making Sure Everything is Working
      • Summary
    • Deploying Stateful Microservices: StatefulSets
      • What is a StatefulSet?
      • StatefulSet vs Deployment
      • Creating a StatefulSet
      • Creating a Service for the StatefulSet
      • Post Deployment Tasks
      • StatefulSet vs Deployment: Persistent Storage
      • StatefulSet vs Deployment: Associated Service
    • Microservices Patterns: Externalized Configurations
      • Storing Configurations in the Environment
      • Kubernetes Secrets and Environment Variables: Why?
      • Kubernetes Secrets and Environment variables: How?
    • Best Practices for Microservices: Health Checks
      • Health Checks
      • Liveness and Readiness Probes
      • Types of Probes
      • Implementing Probes
    • Microservices Resource Management Strategies
      • Resource Management and Risks: From Docker to Kubernetes
      • Requests and limits
      • CPU Resource Units
      • Memory Resource Units
      • Considerations When Setting Resource Requests and Limits
      • Node Reserve Resources vs Allocatable Resources
      • Quality of Service (QoS) Classes
        • Guaranteed
        • Burstable
        • BestEffort
        • QoS Class of a Pod
        • Eviction Order
        • PriorityClass: A Custom Class
    • Autoscaling Microservices in Kubernetes: Introduction
      • Best Practices for Microservices Scalability
        • Use a Service Registry for Service Discovery
        • Implement health checks
        • Designing for scalability and other best practices
    • Autoscaling Microservices in Kubernetes: Horizontal Autoscaling
      • Horizontal Scaling
      • Horizontal Pod Autoscaler
      • Autoscaling Based on Custom Kubernetes Metrics
      • Autoscaling Based on More Specific Custom Kubernetes Metrics
      • Using Multiple Metrics
      • Autoscaling Based on Custom Non-Kubernetes Metrics
      • Cluster Autoscaler
    • Autoscaling Microservices in Kubernetes: Vertical Scaling
      • Vertical Scaling
      • The Vertical Pod Autoscaler
      • VPA Modes
        • Auto
        • Initial
        • Recreate
        • Off
      • VPA Recommendations
        • VPA Limitations
    • Scaling Stateful Microservices: PostgreSQL as an Example
      • StatefulSets and scaling
      • Introduction to Stolon: PostgreSQL cloud native High Availability and more
      • Stolon: Installation
      • Stolon: Usage
    • Microservices Deployment Strategies: One Service Per Node
      • DaemonSet: Role and Use Cases
      • DaemonSet: Creating and Managing
    • Microservices Deployment Strategies: Assigning Workloads to Specific Nodes
      • Assigning Your Workloads to Specific Nodes: Why?
      • Taints and Tolerations
        • Taints and Tolerations: The Definition
        • Taints and Tolerations: An Example
      • The nodeSelector: A Simple Method to Constrain Pods to Specific Nodes
        • The Simplest Form of Node Affinity
        • nodeSelector: An Example
      • Node Affinity and Anti-Affinity
        • Node Affinity: Like nodeSelector but with More Options
        • Node Affinity: Example
        • An Example of Node Anti-Affinity
        • Affinity Weight
        • Affinity and Anti-Affinity Types
    • Kubernetes: Managing Infrastructure Upgrades and Maintenance Mode
      • Why Do We Need to Upgrade Our Infrastructure?
      • What to Upgrade?
      • Upgrading Worker Nodes: Draining
      • Upgrading Worker Nodes: Cordoning
      • Upgrading Node Pools
      • Zero-Downtime Upgrades: Pod Disruption Budgets
    • Microservices Deployment Strategies: Managing Application Updates and Deployment
      • Cloud Native Practices
      • Deployment Strategies
        • Blue/Green Deployment: Introduction
        • Canary Deployment: Introduction
        • Canary Deployment: An Example Using Istio
        • Canary Deployment: Testing in Production
      • Rolling Updates: Definition
        • Rolling Updates: Example
    • Microservices Observability in a Kubernetes World: Part I
      • Introduction to Observability
      • What is Monitoring?
      • What is Observability and Why is it Important?
      • White-Box Monitoring vs Black-Box Monitoring
      • The Three Pillars of Observability
        • Logs
        • Metrics
        • Tracing
        • Observability Pillars in Action
      • The Four Golden Signals of Monitoring
        • Latency
        • Traffic
        • Errors
        • Saturation
      • Monitoring vs Observability: What’s the Difference?
    • Microservices Observability in a Kubernetes World: Part II
      • Introduction to Prometheus
      • How Prometheus Works
      • Installing Prometheus
      • Accessing Prometheus Web User Interface
      • Metrics Available in Prometheus
      • Using Grafana to Visualize Prometheus Metrics
      • Promtail: Gathering Logs from Kubernetes
      • Loki Logging Stack
      • Using Loki to Query Logs
      • Using Jaeger and OpenTelemetry for Distributed Tracing
    • GitOps: Cloud Native Continuous Delivery
      • GitOps: Introduction and Definitions
      • GitOps: Benefits and Drawbacks
      • GitOps: Tools and Ecosystem
    • GitOps: Example of a GitOps workflow using Argo CD
      • Argo CD: Introduction
      • Argo CD: Installation and Configuration
      • Argo CD: Creating an Application
      • Argo CD: Automated Synchronization and Self-Healing
      • Argo CD: Roll Back
      • Argo CD: The Declarative Way
      • Argo CD: Configuration Management
      • Argo CD: Managing Different Environments
      • Argo CD: Deployment Hooks
    • Introduction to Helm
      • What is Helm?
      • How to Install Helm
      • What is a Chart?
      • What is a Repository and How Do Dependencies Work?
      • Installing a Chart
        • Installing From a Specific Repository
        • Using a Chart with a Custom Configuration
        • Using a Chart with Custom Configurations
        • Using a Chart with a Custom Configuration File (values.yaml)
        • Installing a Chart from a Local Directory
        • Installing a Chart from a Local Repository with a Custom Configuration File (values.yaml)
        • Install a Chart from a Local Tarball
        • Install a Chart in a Specific Namespace
        • Dry Run
        • Debug Mode
      • The values.yaml File
      • Upgrading a Chart
      • Listing Releases
      • Uninstalling a Chart
      • Creating a Chart
    • Creating GitOps Pipelines for Microservices - Part I
      • Continuous Integration, Delivery, and Deployment of Microservices
      • CI/CD Tools
        • GitHub Actions
        • Jenkins and Jenkins X
        • Spinnaker
        • Argo CD
        • GitLab CI/CD
      • Creating a CI/CD Pipeline for a Microservice
        • Installing and Configuring Argo CD
        • Creating the Microservice
      • Creating a GitHub Repository for our Microservice
        • Creating a Docker Hub account
        • Setting up GitHub Actions
        • Understanding and Using Helm charts
        • Deploying Helm Charts Using Argo CD
        • Upgrading and Deploying a New Version of the Application
    • Creating GitOps Pipelines for Microservices - Part II
      • Introduction
      • What is Ansible?
      • Prerequisites
      • Automating Helm Values Creation for Multiple Environments Using Ansible
      • Updating the Helm Chart Using Ansible
      • Automating the Argo CD Application YAML File Generation Using Ansible
      • Summary
    • Afterword
      • What’s next?
      • Thank you
      • About the author
      • Join the community
      • Feedback

The Leanpub 60 Day 100% Happiness Guarantee

Within 60 days of purchase you can get a 100% refund on any Leanpub purchase, in two clicks.

Now, this is technically risky for us, since you'll have the book or course files either way. But we're so confident in our products and services, and in our authors and readers, that we're happy to offer a full money back guarantee for everything we sell.

You can only find out how good something is by trying it, and because of our 100% money back guarantee there's literally no risk to do so!

So, there's no reason not to click the Add to Cart button, is there?

See full terms...

80% Royalties. Earn $16 on a $20 book.

We pay 80% royalties. That's not a typo: you earn $16 on a $20 sale. If we sell 5000 non-refunded copies of your book or course for $20, you'll earn $80,000.

(Yes, some authors have already earned much more than that on Leanpub.)

In fact, authors have earned over $13 million writing, publishing and selling on Leanpub.

Learn more about writing on Leanpub

Free Updates. DRM Free.

If you buy a Leanpub book, you get free updates for as long as the author updates the book! Many authors use Leanpub to publish their books in-progress, while they are writing them. All readers get free updates, regardless of when they bought the book or how much they paid (including free).

Most Leanpub books are available in PDF (for computers) and EPUB (for phones, tablets and Kindle). The formats that a book includes are shown at the top right corner of this page.

Finally, Leanpub books don't have any DRM copy-protection nonsense, so you can easily read them on any supported device.

Learn more about Leanpub's ebook formats and where to read them

Write and Publish on Leanpub

You can use Leanpub to easily write, publish and sell in-progress and completed ebooks and online courses!

Leanpub is a powerful platform for serious authors, combining a simple, elegant writing and publishing workflow with a store focused on selling in-progress ebooks.

Leanpub is a magical typewriter for authors: just write in plain text, and to publish your ebook, just click a button. (Or, if you are producing your ebook your own way, you can even upload your own PDF and/or EPUB files and then publish with one click!) It really is that easy.

Learn more about writing on Leanpub